Text File | 1995-11-16
Chest Net -- An Amiga-based Neural Network for Radiology
Michael Tobin, M.D., Ph.D.
I have been intrigued for some time by the possibility of
computers making medical diagnoses. I also felt that the Amiga,
being the rather nifty little computer that it is, could handle the
challenge -- if given a little medical training, of course.
I thought that the information density on a standard chest
radiograph was too much for a small personal computer to analyze.
So, I decided to take a different approach and describe to the Amiga
the typical appearance of each of several chest diseases. I then planned
to input data from an actual patient and ask for Amy's opinion as to
which disease it most closely resembled.
Neural Networks
Neural networks are widely used for making predictions. They are
therefore used in areas as diverse as financial markets, economics,
and even weather forecasting, to name just a few. While not every
problem seems amenable to a neural network approach, the results can
be impressive for those that do.
The neural network is essentially a "black box." In its simplest
incarnation, a neural network has three layers. You train it by
putting data into its Input Layer and telling it what the Output
Layer should be in each case by putting the "correct answer" in the
Target Layer. The input data is presented over and over again until
the Output Layer matches the Target Layer. The neural network
achieves this goal by modifying a Hidden Layer, which sits between
the Input and Output layers and has connections with each cell in
both layers (Figure 1). Training the network, then, involves
strengthening or weakening the connections to and from the Hidden
Layer. Once trained, the network is tested by putting data from an
"unknown" into the Input Layer and reading the Output Layer for the
result.
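The training loop described above can be sketched in a few dozen lines. This is a minimal illustration in modern Python, not NeuroPro's actual algorithm: the layer sizes, learning rate, and class name are all invented, and the weight-update rule is ordinary backpropagation, which NeuroPro may or may not use internally.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyNet:
    """A bare-bones three-layer network (Input -> Hidden -> Output)."""

    def __init__(self, n_in, n_hidden, n_out, seed=1):
        rnd = random.Random(seed)
        # Connection strengths into and out of the Hidden Layer.
        self.w1 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_hidden)]
                   for _ in range(n_in)]
        self.w2 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_out)]
                   for _ in range(n_hidden)]

    def forward(self, x):
        # Each hidden cell sees every input cell; each output cell
        # sees every hidden cell (the article's Figure 1).
        self.h = [sigmoid(sum(x[i] * self.w1[i][j] for i in range(len(x))))
                  for j in range(len(self.w1[0]))]
        self.o = [sigmoid(sum(self.h[j] * self.w2[j][k]
                              for j in range(len(self.h))))
                  for k in range(len(self.w2[0]))]
        return self.o

    def train_pair(self, x, target, lr=0.5):
        # One presentation: compare the Output Layer to the Target
        # Layer, then strengthen or weaken the Hidden Layer's
        # connections in proportion to the error.
        o = self.forward(x)
        d_out = [(target[k] - o[k]) * o[k] * (1 - o[k])
                 for k in range(len(o))]
        d_hid = [self.h[j] * (1 - self.h[j]) *
                 sum(d_out[k] * self.w2[j][k] for k in range(len(d_out)))
                 for j in range(len(self.h))]
        for j in range(len(self.h)):
            for k in range(len(d_out)):
                self.w2[j][k] += lr * d_out[k] * self.h[j]
        for i in range(len(x)):
            for j in range(len(self.h)):
                self.w1[i][j] += lr * d_hid[j] * x[i]

# Present two toy input/target pairs over and over, as described
# above, until the outputs line up with the targets.
net = TinyNet(4, 3, 2)
pairs = [([1, 0, 1, 0], [1, 0]),
         ([0, 1, 0, 1], [0, 1])]
for _ in range(500):
    for x, t in pairs:
        net.train_pair(x, t)
```

After the 500 presentations, feeding either training pattern back in yields an Output Layer close to its target, which is exactly the "testing with an unknown" step described above.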
But is it available on the Amiga?
NeuroPro 2.0 has been available from MegageM since 1992. It is a
three layer system, with a maximum of 256 inputs and outputs. Valid
data would include small images, linear arrays of 256 pixels either
turned "on" or left "off" using a paint program, and ASCII text.
ASCII text is limited to 32 characters (8 bits per character) and
must be right-justified to position 32. To train the system, one
needs an Input file and a Target file (the "right answer," so to
speak), or a single Pair file with lines alternating between input
data and target data. In my case, I decided to create Pair files
with entries that looked like
100051110000111100000111100004011
Tuberculosis
222000111000822110000000000111003
CongestiveHeartFailure
etc
where each number in the 32 number string represents some feature of
the x-ray.
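Building such a Pair file can be sketched in a few lines. The helper name and the feature strings below are invented for illustration (the real files came out of Chest Net via ARexx, not Python); the only details taken from the article are the alternating input/target layout, the 32-digit input strings, and the rule that ASCII targets are at most 32 characters, right-justified to position 32.

```python
def make_pair_lines(entries):
    """Build NeuroPro-style Pair-file lines from (features, disease)
    tuples, where features is a 32-character digit string in which
    each position encodes one feature of the x-ray."""
    lines = []
    for features, disease in entries:
        if len(features) != 32 or not features.isdigit():
            raise ValueError("feature string must be exactly 32 digits")
        lines.append(features)
        # ASCII targets: max 32 chars, right-justified to position 32.
        lines.append(disease[:32].rjust(32))
    return lines

# Hypothetical entries in the spirit of the article's example.
pair_lines = make_pair_lines([
    ("10005111000011110000011110000401", "Tuberculosis"),
    ("22200011100082211000000000011100", "CongestiveHeartFailure"),
])
```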
After training the system by presenting the Input and the Target
data anywhere from 100 to 1000 times -- done automatically by the
program -- one tests the network by giving it an Input file with
numbers hopefully similar enough to those the system was trained on
so that it will recognize them as belonging to one of the diseases
already in its list.
What did you say the 15th character meant?
Well, that's the point. The 15th ASCII character may have meant
that the x-ray was that of an adult or maybe it meant that there was
a pneumonia in the left lung. And good luck in not making a mistake
and putting what was supposed to be in the 15th slot into the 16th
one instead. And who really would want to use the system on a
routine basis if one had to open a text editor and type in 32
characters while looking at a chart to see what everything meant?
What is needed is a good interface.
What is an Amiga user to do?
First, I used Helm (Eagle Tree Software) to create an interface,
which I call Chest Net, filled with radio buttons and other types of
selectors so that when a user chooses, for example, the button
corresponding to "adult," a number is generated which corresponds to
that choice (Figure 2). When the user is finished checking off the
boxes that describe an x-ray, the Chest Net uses Helm's scripting
language to "read" which buttons were checked for each option and
then uses AREXX to create a file with the corresponding numbers.
NeuroPro can then be launched from Chest Net to read the file and use
the data (Figure 3).
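The button-to-number step can be illustrated with a small sketch. The article does the real work with Helm's scripting language and ARexx, which are not shown; the slot assignments, selector names, and digit codes below are entirely hypothetical, and only the idea -- each selector owns one position in the 32-character string -- comes from the article.

```python
# Hypothetical layout: which slot of the 32-slot string each
# selector writes to, and which digit each button choice produces.
SLOTS = {"age_group": 0, "infiltrate_side": 14}
CODES = {
    "age_group": {"newborn": "1", "child": "2", "adult": "3"},
    "infiltrate_side": {"none": "0", "left": "1", "right": "2"},
}

def encode(selections):
    """Turn a dict of checked-off choices into the 32-digit input
    string that gets written to the file NeuroPro will read.
    Untouched selectors default to '0'."""
    digits = ["0"] * 32
    for selector, choice in selections.items():
        digits[SLOTS[selector]] = CODES[selector][choice]
    return "".join(digits)

line = encode({"age_group": "adult", "infiltrate_side": "left"})
```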
Chest Net provides options for creating, adding to, or replacing
data in a Pair File which it then stores on the system disk. Chest
Net can provide a list of diseases already in the Pair file (Figure
4). It can then load the data of a specific disease from the Pair
file back into itself so that the user can see what boxes were
checked off originally. It can make NeuroPro train itself (i.e.,
generate a network) using the Pair file. Once the network has been
trained, Chest Net can be used to transmit an "unknown" from an
actual patient to the network to get a diagnosis. Finally, there is
even an option to view representative images (Figure 5).
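The Pair-file bookkeeping just described -- listing the diseases on file and pulling one entry's feature string back out so its check-boxes can be restored -- amounts to reading the alternating lines back into a lookup table. A sketch, with hypothetical function and data names:

```python
def read_pairs(lines):
    """Parse alternating input/target Pair-file lines into a
    {disease: feature_string} lookup table."""
    it = iter(lines)
    # zip(it, it) walks the lines two at a time: (input, target).
    return {target.strip(): features for features, target in zip(it, it)}

pairs = read_pairs([
    "10005111000011110000011110000401",
    "Tuberculosis".rjust(32),
    "22200011100082211000000000011100",
    "CongestiveHeartFailure".rjust(32),
])
diseases = sorted(pairs)          # the kind of list shown in Figure 4
tb_features = pairs["Tuberculosis"]   # reload one entry's check-boxes
```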
So, how do you use it?
Well, presumably as the "expert," I would be the one to define
what is seen in tuberculosis, sarcoid, asbestos disease, etc. and
then I would train the network. You, as a user, would check the
boxes corresponding to your "unknown" chest x-ray and then push a
button to get NeuroPro's opinion. As a user, you could still modify
the Pair file in case your opinion of what congestive heart failure
looks like is different than mine. You could also add to the Pair
file diseases I chose not to include. If you do make changes, you
will have to train the network all over again. This is a
time-consuming operation even on an Amiga with a Motorola 68040 CPU.
Well, does it work?
Not only does it work, but it has worked well enough to amaze
those I have shown it to. The "unknowns" do have to be similar to
the diseases in the Pair file or else NeuroPro's output becomes a
gibberish of letters and characters.
Are there limitations?
There are definite limitations. Some of these are related to the
complexity of Radiology while others are inherent in neural networks
or in this Amiga implementation of them.
First, neural networks are limited by the expertise of the person
who provides the training data (i.e., the Pair file). Ultimately,
computers are no smarter than we are. If I don't know the typical
radiographic appearance of PCP pneumonia, then neither will Chest
Net.
Second, there may be (and often are!) a variety of radiographic
presentations for the same disease. A patient with PCP pneumonia
can, for example, have a perfectly normal looking chest x-ray! Such
patients will usually, however, have abnormal gallium scans, abnormal
arterial blood gas values, positive findings on lung washings
performed via bronchoscopy and problems with their immune systems.
If such possibilities are considered in Chest Net, multiple
presentations of the same disease could be entered as distinct
entities and be given labels like PCP.1, PCP.2, etc.
Third, just as one disease can have multiple presentations,
several diseases can have essentially the same radiographic pattern.
Indeed, this turns out to be quite common and the radiologist is then
unable to give a specific diagnosis. Rather, only a list of
diagnostic possibilities can be given with some possibilities more
likely than others, depending on other factors. Sometimes clinical
history is helpful. Does the patient have a fever? Was the
abnormality present throughout the patient's life? If we provide a
"clinical history" selector with these possibilities given as
options, we may be able to handle this complication. However,
sometimes even knowing the clinical history will not permit a
specific diagnosis. Yet this is what we are usually asking the
neural network to do.
My fourth point would be that each x-ray is a moment in time.
Knowing what the previous radiograph looked like is very important.
Is the patient improving on antibiotics or is a "water pill" making
the x-ray better? Of course, we could allow for the results of prior
studies in the program but .....
... and this brings me to my fifth point. Thirty-two descriptors
for all chest diseases is really a rather small number to be
restricted to. To describe a chest radiograph adequately, we could
easily justify a number 2 or 3 times larger. One solution might be
what I call an "adaptive neural network system". By this I mean that
selection of certain boxes in Chest Net could, in fact, lead to a
variant set of questions -- and a different network -- thereby
greatly extending the 32 character limitation. So, if you selected
"Newborn" under the "Age Group" Selector, you might see different
questions than if you had selected "Adult."
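The "adaptive" idea above is just a branch on an early answer: the choice made in one selector decides which follow-up question set (and which separately trained network) applies. A toy sketch, with invented question sets:

```python
# Hypothetical follow-up question sets, one per Age Group answer.
# Each set would pair with its own trained network, so the overall
# system is no longer limited to a single 32-character vocabulary.
QUESTION_SETS = {
    "Newborn": ["congenital anomaly?", "lines and tubes present?"],
    "Adult":   ["cavitation?", "pleural effusion?"],
}

def questions_for(age_group):
    """Select the follow-up question set from the Age Group answer."""
    return QUESTION_SETS[age_group]

newborn_qs = questions_for("Newborn")
adult_qs = questions_for("Adult")
```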
Finally, there is no way, at least in this version of NeuroPro,
to "weight" any of the 32 inputs so that some are more important than
others. Suppose that a certain disease only occurs in women. There
is no sense in allowing NeuroPro to suggest as a diagnosis a disease
that only occurs in men! Thus the sex of the patient may be
absolutely irrelevant for one disease and be totally determining in
another with everything in the middle being possible. There is no
smooth way that I see for doing this in NeuroPro at this time.
So what is the bottom line?
The bottom line is that Chest Net is potentially useful,
providing we recognize its limitations. A busy radiologist is
probably not going to have either the time or the need to consult a
neural network for most cases. There will always be challenging
radiographs, and it may be useful to find out what the computer has to
say, especially if a colleague is unavailable.
The situation with radiology residents in training may be
different. If nothing else, Chest Net is organized and thorough.
One resident felt that it could help him become more structured in
his approach to a chest film and less likely to miss findings.
Perhaps I am the one after all who got the most out of Chest Net
because writing it forced me to think about how I think and what I
teach. Limited to 32 descriptors, I was compelled to
concentrate on crucial factors that can make or break a diagnosis.
Finally, it is worthwhile remembering that a patient is a person
while an x-ray is just an image. Consequently, one never treats an
x-ray abnormality, only a patient. If an x-ray provides unusual or
unexpected results, the x-ray is repeated to exclude the possibility
of artefact. Experience, good judgement and empathy. Now how can I
use AREXX to program that?